Move fast and break things


NVIDIA's AI team reportedly scraped YouTube, Netflix videos without permission

Engadget

On Monday, 404 Media's Samantha Cole reported that the $2.4 trillion company asked workers to download videos from YouTube, Netflix and other sources to develop commercial AI projects. The graphics card maker is among the tech companies that appear to have adopted a "move fast and break things" ethos as they race to establish dominance in this feverish, too-often-shameful AI gold rush. The training was reportedly meant to develop models for products like its Omniverse 3D world generator, self-driving car systems and "digital human" efforts. NVIDIA defended its practice in an email to Engadget. The company equated the practice to a person's right to "learn facts, ideas, data, or information from another source and use it to make their own expression."


The AI arms race is on. But we should slow down AI progress instead. - Vox

Stanford HAI

"Computers need to be accountable to machines," a top Microsoft executive told a roomful of reporters in Washington, DC, on February 10, three days after the company launched its new AI-powered Bing search engine. He quickly corrected himself: "Computers need to be accountable to people!" he said, and then made sure to clarify, "That was not a Freudian slip." Slip or not, the laughter in the room betrayed a latent anxiety. Progress in artificial intelligence has been moving so unbelievably fast lately that the question is becoming unavoidable: How long until AI dominates our world to the point where we're answering to it rather than it answering to us? First, last year, we got DALL-E 2 and Stable Diffusion, which can turn a few words of text into a stunning image. Then Microsoft-backed OpenAI gave us ChatGPT, which can write essays so convincing that it freaks out everyone from teachers (what if it helps students cheat?) to journalists (could it replace them?) to disinformation experts (will it amplify conspiracy ...


Why leading researchers fear AI will wreak even more havoc than social media

#artificialintelligence

When Sam Altman was sunsetting his first startup in early 2012, there was little indication that his path ahead would parallel that of Silicon Valley's then-wunderkind Mark Zuckerberg. While Altman was weighing his next moves after shutting down Loopt, his location-sharing startup, the Facebook CEO was at the forefront of social media's global takeover and leading his company to a blockbuster initial public offering that valued Zuckerberg's brainchild at $104 billion. But just over a decade later, the tables have dramatically turned. Nowadays, the promise of social media as a unifying force for good has all but collapsed, and Zuckerberg is slashing thousands of jobs after his company's rocky pivot to the metaverse. And it's Altman, a 37-year-old Stanford dropout, who's now seeing his star rise to dizzying heights -- and who faces the pitfalls of great power.


Move Fast and Break Things? The AI Governance Dilemma

#artificialintelligence

AI is the future, and there's lots of money to be made from it. But organisations keep making the news over AI governance failings, such as Microsoft's chatbot that turned racist and Google Photos labelling African-Americans as gorillas. We're seeing a growth of ethics and governance councils, but with mixed success - Google shut theirs down. Why is good governance proving so difficult? Does there have to be a trade-off between good governance and innovation? The market for AI has been forecast to grow from $9.5bn in 2018 to $118.6bn by 2025. Naturally, there is a race to get to the opportunities first.


How to operationalize AI ethics

#artificialintelligence

Last week, I moderated a panel at TWIMLcon about how teams and organizations can operationalize responsible AI, combining perspectives from three people from different corners of the tech and AI community. Rachel Thomas is best known as cofounder of fast.ai, a popular free online deep learning course. In recent months, Thomas was named director of a new organization that mixes research, policy, and education: the Center for Applied Data Ethics at the University of San Francisco. Guillaume Saint-Jacques is a senior software engineer at LinkedIn's Fairness Project, an applied research team that assesses the performance of the company's AI systems. Parinaz Sobhani is director of machine learning at Georgian Partners, an investor in SaaS startups with its own applied research lab to help portfolio companies apply machine learning.


Artificial stupidity: 'Move slow and fix things' could be the mantra AI needs

#artificialintelligence

"Let's not use society as a test-bed for technologies that we're not sure yet how they're going to change society," warned Carly Kind, director at the Ada Lovelace Institute, an artificial intelligence (AI) research body based in the U.K. "Let's try to think through some of these issues -- move slower and fix things, rather than move fast and break things." Kind was speaking as part of a recent panel discussion at Digital Frontrunners, a conference in Copenhagen that focused on the impact of AI and other next-gen technologies on society. The "move fast and break things" ethos embodied by Facebook's rise to internet dominance is one that has been borrowed by many a Silicon Valley startup: develop and swiftly ship an MVP (minimal viable product), iterate, learn from mistakes, and repeat. These principles are relatively harmless when it comes to developing a photo-sharing app, social network, or mobile messaging service, but in the 15 years since Facebook came to the fore, the technology industry has evolved into a very different beast. Large-scale data breaches are a near-daily occurrence, data-harvesting on an industrial level is threatening democracies, and artificial intelligence (AI) is now permeating just about every facet of society -- often to humans' chagrin.


John Hennessy on the Leadership Crisis in Silicon Valley

WIRED

John Hennessy is the chairman of Alphabet, the parent company of Google, and the former president of Stanford. He's just published a fascinating new book, "Leading Matters," and he agreed to sit for an interview about his experiences. Nicholas Thompson: In the book you talk about a growing leadership crisis, and you mention some industries that have been faltering, but you don't single out Silicon Valley. Did you leave it out deliberately, or do you think there is a leadership crisis in Silicon Valley? John Hennessy: The valley has its share of leadership crises. And I think there's also a growing challenge that these companies have now gotten to the size where their influence on the public is much larger.


Techstars: AI startups must be wary of 'move fast and break things' mantra

#artificialintelligence

Techstars, one of the largest startup accelerator organizations on the planet, currently runs 40 accelerators in 25 cities around the world. Some of its startup accelerators focus on helping early-stage companies grow, while others target specific sectors, like programs run with Amazon for Alexa startups and Target for retail companies. This fall, Techstars will open its very first AI startup accelerator in Montreal, a city that in roughly the past year has welcomed research labs from Facebook AI Research, Microsoft Research, Google AI, and, as of last month, Samsung Research. The accelerator is being launched in tandem with Real Ventures, a prominent seed fund investor in Montreal, and includes advisors from Google, as well as Element AI and other companies that call Montreal home.